Globally convergent homotopy methods: A tutorial
Authors
Abstract
Similar articles
Globally Convergent Inexact Newton Methods
Inexact Newton methods for finding a zero of F : R^n → R^n are variations of Newton's method in which each step only approximately satisfies the linear Newton equation but still reduces the norm of the local linear model of F. Here, inexact Newton methods are formulated that incorporate features designed to improve convergence from arbitrary starting points. For each method, a basic global convergence ...
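As a rough illustration of the inexact Newton condition described in this abstract, the sketch below takes Newton-type steps whose linear systems are solved only approximately, to a relative residual eta (the forcing term). The test system, the CGNR inner solver, and the constant forcing term are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cgnr(A, b, eta, max_inner=100):
    """Approximately solve A s = b by conjugate gradients on the normal
    equations, stopping once ||b - A s|| <= eta * ||b|| (inexact solve)."""
    s = np.zeros_like(b)
    r = b.copy()                  # residual b - A s
    z = A.T @ r
    p = z.copy()
    for _ in range(max_inner):
        if np.linalg.norm(r) <= eta * np.linalg.norm(b):
            break
        w = A @ p
        alpha = (z @ z) / (w @ w)
        s += alpha * p
        r -= alpha * w
        z_new = A.T @ r
        beta = (z_new @ z_new) / (z @ z)
        p = z_new + beta * p
        z = z_new
    return s

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_outer=50):
    """Inexact Newton: each step s_k only approximately satisfies the Newton
    equation J(x_k) s_k = -F(x_k), subject to the forcing-term condition
    ||F(x_k) + J(x_k) s_k|| <= eta * ||F(x_k)||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        fx = F(x)
        if np.linalg.norm(fx) <= tol:
            break
        x = x + cgnr(J(x), -fx, eta)
    return x

# Illustrative 2x2 system: intersection of a circle and a line.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(inexact_newton(F, J, np.array([3.0, 1.0])))   # ~ [sqrt(2), sqrt(2)]
```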
Globally convergent DC trust-region methods
In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the solution of nonlinear optimization problems by trust-region methods. We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of the corresponding trust-region subproblems....
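The abstract's idea of a DC model can be illustrated by splitting an indefinite quadratic trust-region model into a difference of two convex quadratics. The spectral split below is one common choice and is only an assumption here; the paper's actual decomposition and its primal-dual subgradient solver are not reproduced.

```python
import numpy as np

def dc_split(H):
    """Return PSD matrices (H_plus, H_minus) with H = H_plus - H_minus,
    obtained by separating the positive and negative eigenvalues of H."""
    eigvals, eigvecs = np.linalg.eigh(H)
    H_plus = eigvecs @ np.diag(np.maximum(eigvals, 0.0)) @ eigvecs.T
    H_minus = eigvecs @ np.diag(np.maximum(-eigvals, 0.0)) @ eigvecs.T
    return H_plus, H_minus

g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -3.0]])        # indefinite Hessian approximation
H_plus, H_minus = dc_split(H)

q  = lambda s: g @ s + 0.5 * s @ H @ s          # original quadratic model
q1 = lambda s: g @ s + 0.5 * s @ H_plus @ s     # convex part
q2 = lambda s: 0.5 * s @ H_minus @ s            # convex part
s = np.array([0.3, -0.7])
print(np.isclose(q(s), q1(s) - q2(s)))          # True: q = q1 - q2 (a DC model)
```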
A Globally Convergent Probability-One Homotopy for Linear Programs with Linear Complementarity Constraints
A solution of the standard formulation of a linear program with linear complementarity constraints (LPCC) does not satisfy a constraint qualification. A family of relaxations of an LPCC, associated with a probability-one homotopy map, proposed here is shown to have several desirable properties. The homotopy map is nonlinear, replacing all the constraints with nonlinear relaxations of NCP functi...
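The LPCC-specific homotopy map is not given in this excerpt, but the basic construction behind probability-one homotopy methods (the subject of the tutorial above) is a map such as rho_a(lambda, x) = lambda*F(x) + (1 - lambda)*(x - a), whose zero curve is tracked from lambda = 0 (where x = a is the unique zero) to lambda = 1 (a zero of F). The sketch below uses naive stepping in lambda with Newton correction; the example system and step sizes are assumptions, and production codes instead follow the zero curve in arclength.

```python
import numpy as np

def track_homotopy(F, J, a, n_steps=100, newton_iters=20, tol=1e-10):
    """Track the zero curve of the probability-one homotopy
        rho_a(lambda, x) = lambda * F(x) + (1 - lambda) * (x - a)
    from lambda = 0 to lambda = 1 by incrementing lambda and correcting x
    with Newton's method at each step (a simplification of curve tracking)."""
    x = np.asarray(a, dtype=float).copy()
    for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        for _ in range(newton_iters):
            rho = lam * F(x) + (1.0 - lam) * (x - a)
            if np.linalg.norm(rho) <= tol:
                break
            J_rho = lam * J(x) + (1.0 - lam) * np.eye(len(x))
            x = x - np.linalg.solve(J_rho, rho)
    return x

# Illustrative system (not from the paper); its zero is x = (1, 1).
F = lambda x: np.array([x[0]**3 - x[1], x[0] + x[1] - 2.0])
J = lambda x: np.array([[3.0 * x[0]**2, -1.0], [1.0, 1.0]])
print(track_homotopy(F, J, a=np.array([0.5, 0.5])))   # ~ [1., 1.]
```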
Optimization Based Globally Convergent Methods for the Nonlinear Complementarity Problem
The nonlinear complementarity problem has been used to study and formulate various equilibrium problems including the traffic equilibrium problem, the spatial equilibrium problem and the Nash equilibrium problem. To solve the nonlinear complementarity problem, various iterative methods such as projection methods, linearized methods and Newton method have been proposed and their convergence resu...
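One standard way to make the nonlinear complementarity problem (find x >= 0 with F(x) >= 0 and x'F(x) = 0) amenable to Newton-type solvers, not necessarily the approach of this paper, is to rewrite it as a square nonlinear system using the Fischer-Burmeister function, as sketched below; the affine mapping F and the use of scipy.optimize.fsolve are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b, which is zero exactly when
    a >= 0, b >= 0 and a * b = 0 (a complementary pair)."""
    return np.sqrt(a**2 + b**2) - a - b

def F(x):
    """Illustrative affine NCP mapping; the NCP solution is x = (1, 0)."""
    return np.array([x[0] - 1.0, x[1] + 2.0])

def Phi(x):
    """Componentwise FB reformulation: Phi(x) = 0  <=>  x solves the NCP."""
    return fischer_burmeister(x, F(x))

print(fsolve(Phi, x0=np.array([0.5, 0.5])))   # ~ [1., 0.]
```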
Sub-Sampled Newton Methods I: Globally Convergent Algorithms
Large scale optimization problems are ubiquitous in machine learning and data analysis and there is a plethora of algorithms for solving such problems. Many of these algorithms employ sub-sampling, as a way to either speed up the computations and/or to implicitly implement a form of statistical regularization. In this paper, we consider second-order iterative optimization algorithms, i.e., thos...
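A minimal sketch of the sub-sampling idea, assuming L2-regularized logistic regression as the objective: the gradient is computed on all data, while the Hessian is estimated from a random subsample at each iteration. The sampling rule, regularization, and absence of a line search are simplifications, not the algorithms analyzed in the paper.

```python
import numpy as np

def subsampled_newton(X, y, n_iter=20, sample_size=100, reg=1e-3, seed=0):
    """Newton-type iteration for regularized logistic regression in which
    the Hessian is formed from a random subsample of the data only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) / n + reg * w            # full gradient
        idx = rng.choice(n, size=min(sample_size, n), replace=False)
        Xs, ps = X[idx], p[idx]
        D = ps * (1.0 - ps)                           # per-sample curvature
        H = (Xs * D[:, None]).T @ Xs / len(idx) + reg * np.eye(d)
        w -= np.linalg.solve(H, grad)                 # sub-sampled Newton step
    return w

# Synthetic data just to exercise the routine.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
w_true = rng.standard_normal(5)
y = (X @ w_true + rng.standard_normal(1000) > 0).astype(float)
print(subsampled_newton(X, y))
```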
Journal
Journal title: Applied Mathematics and Computation
Year: 1989
ISSN: 0096-3003
DOI: 10.1016/0096-3003(89)90129-x